 few-shot classification


ProtoDiff: Learning to Learn Prototypical Networks by Task-Guided Diffusion

Neural Information Processing Systems

Specifically, a set of prototypes is optimized to achieve per-task prototype overfitting, enabling the overfitted prototypes for individual tasks to be obtained accurately. Furthermore, we introduce a task-guided diffusion process within the prototype space, enabling the meta-learning of a generative process that transitions from a vanilla prototype to an overfitted prototype.
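The "vanilla prototype" this abstract starts from is the standard Prototypical Networks construction: each class prototype is the mean embedding of that class's support examples, and queries are assigned to the nearest prototype. A minimal NumPy sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def compute_prototypes(support_embeddings, support_labels, num_classes):
    """Vanilla prototypes: the mean embedding of each class's support examples."""
    dim = support_embeddings.shape[1]
    prototypes = np.zeros((num_classes, dim))
    for c in range(num_classes):
        prototypes[c] = support_embeddings[support_labels == c].mean(axis=0)
    return prototypes

def classify(query_embeddings, prototypes):
    """Assign each query to the nearest prototype (squared Euclidean distance)."""
    dists = ((query_embeddings[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)
```

ProtoDiff's contribution, per the abstract, is learning a diffusion process that maps such vanilla prototypes toward per-task overfitted ones; the sketch above only covers the baseline starting point.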





Towards Practical Few-Shot Query Sets: Transductive Minimum Description Length Inference

Neural Information Processing Systems

In particular, for each task at testing time, the classes effectively present in the unlabeled query set are known a priori, and correspond exactly to the set of classes represented in the labeled support set.




ee89223a2b625b5152132ed77abbcc79-Supplemental.pdf

Neural Information Processing Systems

The difference between the two datasets comes from how CIFAR100 is split into meta-train / meta-validation / meta-test sets. Gradients have more noise due to less data variation, compared to the higher resolution of miniImageNet images. The state-of-the-art method is shown to outperform ALFA+fo-Proto-MAML in Table D. In this section, we study how robust the proposed meta-learner is to changes in domains, through additional experiments on cross-domain few-shot classification under similar settings to Section 4.3.2. During outer-loop optimization, 15 examples are sampled per class for D0. All models were trained for 50,000 iterations with a meta-batch size of 2 and 4 tasks for 5-shot and 1-shot, respectively.



An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning

Neural Information Processing Systems

The most popular fashion of SSFSL is to assign pseudo-labels to unlabeled data by carefully devising tailored strategies, and then augment the extremely small labeled support set in few-shot classification, e.g., [9, 15, 36].
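The pseudo-labeling pipeline the abstract describes can be sketched generically: classify each unlabeled point against the class means of the support set, keep only confident predictions, and fold them back into the support set. This is a minimal NumPy illustration of the general idea, not the specific strategy of any cited paper; the function name, the nearest-mean classifier, and the confidence threshold are all assumptions:

```python
import numpy as np

def pseudo_label_augment(support_x, support_y, unlabeled_x, num_classes,
                         threshold=0.7):
    """Generic SSFSL pseudo-labeling sketch: label unlabeled points by nearest
    class mean, keep confident ones, and augment the support set with them."""
    # Class means of the labeled support set
    protos = np.stack([support_x[support_y == c].mean(0)
                       for c in range(num_classes)])
    # Squared Euclidean distance of each unlabeled point to each class mean
    dists = ((unlabeled_x[:, None, :] - protos[None]) ** 2).sum(-1)
    # Softmax over negative distances as a crude confidence proxy
    logits = -dists
    probs = np.exp(logits - logits.max(1, keepdims=True))
    probs /= probs.sum(1, keepdims=True)
    keep = probs.max(1) >= threshold
    # Augmented (pseudo-labeled) support set
    aug_x = np.concatenate([support_x, unlabeled_x[keep]])
    aug_y = np.concatenate([support_y, probs.argmax(1)[keep]])
    return aug_x, aug_y
```

Real SSFSL methods differ mainly in how pseudo-labels are produced and filtered; the thresholding step above is the simplest possible filter.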